In this notebook, a template is provided for you to implement, in stages, the functionality required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission. Sections that begin with 'Implementation' in the header indicate where you should begin your implementation. Note that some implementation sections are optional and will be marked with 'Optional' in the header.
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited, typically by double-clicking the cell to enter edit mode.
Visualize the German Traffic Signs Dataset. This is open ended; some suggestions include plotting traffic sign images, plotting the count of each sign, etc. Be creative!
The pickled data is a dictionary with 4 key/value pairs:
# Load pickled data
import os
import pickle
import numpy as np
import tensorflow as tf
import cv2
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from scipy.ndimage import rotate  # scipy.ndimage.interpolation is deprecated; rotate lives in scipy.ndimage
from random import randint
is_labels_encode = False
# TODO: fill this in based on where you saved the training and testing data
training_file = 'traffic-signs-data/train.p'
testing_file = 'traffic-signs-data/test.p'
with open(training_file, mode='rb') as f:
    train = pickle.load(f)
with open(testing_file, mode='rb') as f:
    test = pickle.load(f)
# ndarrays
X_data, y_data = train['features'], train['labels']
X_test, y_test = test['features'], test['labels']
print("Done loading")
unique_values, indices, counts = np.unique(y_data, return_index=True, return_counts=True)
for value, count in zip(unique_values, counts):
    print(value, count)
for index in indices:
    plt.imshow(X_data[index])
    plt.show()
# Need to split validation set off first, prior to any data processing.
# Otherwise validation data will leak into the training set during the normalization process.
# Must shuffle training data prior to split!
# Will use 1/5 of the current training set for the validation set.
# That gives roughly a 60/15/25 split, which is on the low end for training size, but the generated
# data will augment the training set, which should alleviate the concern of it being too small.
# Get randomized datasets for training and validation
X_train, X_validation, y_train, y_validation = train_test_split(
    X_data,
    y_data,
    test_size=0.2,
    random_state=832289)
print("Validation set created")
### Preprocess the data here.
### Feel free to use as many code cells as needed.
## Create horizontally flipped augmented data
## Create randomly rotated augmented data
## TODO: Determine if there's an efficient way to rotate each image by a random angle,
## rather than all images by same random angle.
X_rotate_1 = rotate(X_train, angle=randint(-45,45), axes=(1,2), reshape=False)
X_rotate_2 = rotate(X_train, angle=randint(-45,45), axes=(1,2), reshape=False)
X_rotate_3 = rotate(X_train, angle=randint(-45,45), axes=(1,2), reshape=False)
X_rotate_4 = rotate(X_train, angle=randint(-45,45), axes=(1,2), reshape=False)
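The TODO above can be addressed by giving every image its own random angle; a sketch (the helper name is hypothetical), slower than one batched `rotate` call but producing independent rotations per image:

```python
import numpy as np
from scipy.ndimage import rotate

def rotate_each(images, max_angle=45, rng=None):
    """Rotate every image in a (N, H, W, C) batch by its own random angle."""
    rng = np.random.default_rng() if rng is None else rng
    angles = rng.uniform(-max_angle, max_angle, size=len(images))
    out = np.empty_like(images)
    for i, angle in enumerate(angles):
        # axes=(0, 1) rotates in the image plane; reshape=False keeps 32x32.
        out[i] = rotate(images[i], angle=angle, axes=(0, 1), reshape=False)
    return out
```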
## TODO: Zoom, Stretching, Color Perturbation
#orig_image = X_train[0]
#rotate_image = X_rotate_1[0]
#plt.imshow(orig_image)
#plt.show()
#plt.imshow(rotate_image)
#plt.show()
X_train = np.vstack((X_train, X_rotate_1, X_rotate_2, X_rotate_3, X_rotate_4))
y_train = np.concatenate((y_train, y_train, y_train, y_train, y_train))
y_train.shape
if y_train[0] == y_train[31367]:
    print("Copying successful")
# Normalization Method 1: Compute mean/stddev per pixel and channel across the entire batch.
# Use these values to normalize
def normalize_dataset(data_set):
    return (data_set - np.mean(data_set, axis=0)) / (np.std(data_set, axis=0) + 1e-8)
# Normalization Method 2: Compute mean/stddev per image, normalize each image that way.
def normalize_dataset_alt(data_set):
    return (data_set - np.mean(data_set, axis=(1,2,3), keepdims=True)) / (np.std(data_set, axis=(1,2,3), keepdims=True) + 1e-8)
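A quick sanity check of Method 2 on synthetic data (names hypothetical): after per-image normalization, each image should have mean ≈ 0 and standard deviation ≈ 1.

```python
import numpy as np

def normalize_per_image(data_set):
    # Same computation as normalize_dataset_alt above.
    mean = np.mean(data_set, axis=(1, 2, 3), keepdims=True)
    std = np.std(data_set, axis=(1, 2, 3), keepdims=True)
    return (data_set - mean) / (std + 1e-8)

X_demo = np.random.default_rng(0).uniform(0, 255, size=(5, 32, 32, 3))
X_demo_norm = normalize_per_image(X_demo)
```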
def normalize_grayscale(X):
    """
    Normalize the image data with min-max scaling to a range of [0.1, 0.9]
    :param X: The image data to be normalized
    :return: Normalized image data
    """
    a = 0.1
    b = 0.9
    return a + ((X - X.min(axis=0)) * (b - a)) / (X.max(axis=0) - X.min(axis=0))
# Convert image to grayscale
def rgb2gray(X):
    X_gray = np.tensordot(X, [0.299, 0.587, 0.114], axes=([3], [0]))
    return normalize_grayscale(X_gray)
# Convert to grayscale
# X_train = rgb2gray(X_train)
# X_validation = rgb2gray(X_validation)
# X_test = rgb2gray(X_test)
# Reshape from (num_samples, 32, 32) -> (num_samples, 32, 32, 1)
# X_train = X_train[:,:,:, None]
# X_validation = X_validation[:,:,:, None]
# X_test = X_test[:,:,:, None]
# Normalize grayscale intensities
X_train = normalize_dataset_alt(X_train)
X_validation = normalize_dataset_alt(X_validation)
X_test = normalize_dataset_alt(X_test)
print(X_train.shape)
print("Done with normalization.")
from sklearn.preprocessing import LabelBinarizer
if not is_labels_encode:
    # Turn labels into numbers and apply one-hot encoding
    encoder = LabelBinarizer()
    encoder.fit(y_train)
    y_train = encoder.transform(y_train)
    y_validation = encoder.transform(y_validation)
    y_test = encoder.transform(y_test)
    # Change to float32, so it can be multiplied against the features in TensorFlow, which are float32
    y_train = y_train.astype(np.float32)
    y_validation = y_validation.astype(np.float32)
    y_test = y_test.astype(np.float32)
    is_labels_encode = True
    print('Labels One-Hot Encoded')
### Save Generated, Processed Data To Local File Cache
### Can't save entire training set due to bug, so splitting training data.
### http://stackoverflow.com/questions/31468117/python-3-can-pickle-handle-byte-objects-larger-than-4gb
directory = "processed_data/"
os.makedirs(directory, exist_ok=True)  # ensure the cache directory exists

def saveData(pickle_filename, data):
    if not os.path.isfile(directory + pickle_filename):
        print('Saving data to pickle file...')
        try:
            with open(directory + pickle_filename, 'wb') as pfile:
                pickle.dump(data, pfile, pickle.HIGHEST_PROTOCOL)
        except Exception as e:
            print('Unable to save data to', pickle_filename, ':', e)
            raise
        print('Data cached in pickle file.')

for i, X_train_n in enumerate(np.split(X_train, 7)):
    saveData("X_train" + str(i) + ".pickle", X_train_n)
saveData("y_train.pickle", y_train)
saveData('valid_test.pickle', {'X_validation': X_validation, 'X_test': X_test, 'y_validation': y_validation, 'y_test': y_test})
print("All data cached")
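As an aside, the linked 4 GB issue affects older pickle protocols on some platforms; assuming Python 3.4+, protocol 4 supports large objects directly, so a single dump is an alternative to splitting (sketch; helper names hypothetical):

```python
import pickle

def save_large(path, obj):
    # Protocol 4 (Python 3.4+) lifts the 4 GB limit of older pickle protocols.
    with open(path, 'wb') as f:
        pickle.dump(obj, f, protocol=4)

def load_large(path):
    with open(path, 'rb') as f:
        return pickle.load(f)
```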
Start from here if the augmented, preprocessed data has already been saved to the local file cache.
import math
import pickle
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tqdm import tqdm
directory = "processed_data/"
# Reload the data
def load_data(filename):
    with open(directory + filename, 'rb') as f:
        return pickle.load(f)
X_train = np.vstack([load_data('X_train' + str(i) + '.pickle') for i in range(7)])
y_train = load_data('y_train.pickle')
pickle_data = load_data('valid_test.pickle')
X_validation = pickle_data['X_validation']
y_validation = pickle_data['y_validation']
X_test = pickle_data['X_test']
y_test = pickle_data['y_test']
del pickle_data # Free up memory
print('Data and modules loaded.')
### To start off let's do a basic data summary.
# number of training examples
n_train = len(y_train)
# number of validation examples
n_validation = len(y_validation)
# number of testing examples
n_test = len(y_test)
# what's the shape of an image?
image_shape = X_train[0].shape
# how many classes are in the dataset
n_classes = y_train.shape[1]
print("Number of training examples =", n_train)
print("Number of validation examples =", n_validation)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the German Traffic Sign Dataset.
There are various aspects to consider when thinking about this problem:
Here is an example of a published baseline model on this problem. You aren't required to be familiar with the approach used in the paper, but it's good practice to try to read papers like these.
Use the code cell (or multiple code cells, if necessary) to implement the first step of your project. Once you have completed your implementation and are satisfied with the results, be sure to thoroughly answer the questions that follow.
import tensorflow as tf
# Input [32 x 32 x 3] - Weight []
# Conv 1a [32 x 32 x 64] - Weight - [3 x 3 x 3 x 64]
# Conv 1b [32 x 32 x 64] - Weight - [3 x 3 x 64 x 64]
# Maxpool [16 x 16 x 64] - Weight - []
# Conv 2a [16 x 16 x 128] - Weight - [3 x 3 x 64 x 128]
# Conv 2b [16 x 16 x 128] - Weight - [3 x 3 x 128 x 128]
# Maxpool [8 x 8 x 128] - Weight - []
# Conv 3a [8 x 8 x 256] - Weight - [3 x 3 x 128 x 256]
# Conv 3b [8 x 8 x 256] - Weight - [3 x 3 x 256 x 256]
# Maxpool [4 x 4 x 256] - Weight - []
# FC 1 [1 x 1 x 512] - Weight - [4 x 4 x 256 x 512]
# Out [1 x 1 x 43] - Weight - [512 x 43]
# Total Weights = 3,263,680
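The weight total in the comment above can be verified directly from the listed shapes (biases excluded):

```python
from math import prod

# Weight shapes from the architecture sketch above (biases excluded).
shapes = [
    (3, 3, 3, 64), (3, 3, 64, 64),        # Conv 1a / 1b
    (3, 3, 64, 128), (3, 3, 128, 128),    # Conv 2a / 2b
    (3, 3, 128, 256), (3, 3, 256, 256),   # Conv 3a / 3b
    (4 * 4 * 256, 512),                   # FC 1
    (512, 43),                            # Out
]
total_weights = sum(prod(s) for s in shapes)  # 3,263,680
```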
layer_depth = {
'layer_1': 64,
'layer_2': 128,
'layer_3': 256,
'fully_connected': 512
}
weights = {
'layer_1a': tf.Variable(tf.truncated_normal([3, 3, 3, layer_depth['layer_1']], stddev=1e-2)),
'layer_1b': tf.Variable(tf.truncated_normal([3, 3, layer_depth['layer_1'], layer_depth['layer_1']], stddev=1e-2)),
'layer_2a': tf.Variable(tf.truncated_normal([3, 3, layer_depth['layer_1'], layer_depth['layer_2']], stddev=1e-2)),
'layer_2b': tf.Variable(tf.truncated_normal([3, 3, layer_depth['layer_2'], layer_depth['layer_2']], stddev=1e-2)),
'layer_3a': tf.Variable(tf.truncated_normal([3, 3, layer_depth['layer_2'], layer_depth['layer_3']], stddev=1e-2)),
'layer_3b': tf.Variable(tf.truncated_normal([3, 3, layer_depth['layer_3'], layer_depth['layer_3']], stddev=1e-2)),
'fully_connected': tf.Variable(tf.truncated_normal([4*4*256, layer_depth['fully_connected']], stddev=1e-2)),
'out': tf.Variable(tf.truncated_normal([layer_depth['fully_connected'], n_classes], stddev=1e-2))
}
biases = {
'layer_1a': tf.Variable(tf.zeros(layer_depth['layer_1'])),
'layer_1b': tf.Variable(tf.zeros(layer_depth['layer_1'])),
'layer_2a': tf.Variable(tf.zeros(layer_depth['layer_2'])),
'layer_2b': tf.Variable(tf.zeros(layer_depth['layer_2'])),
'layer_3a': tf.Variable(tf.zeros(layer_depth['layer_3'])),
'layer_3b': tf.Variable(tf.zeros(layer_depth['layer_3'])),
'fully_connected': tf.Variable(tf.zeros(layer_depth['fully_connected'])),
'out': tf.Variable(tf.zeros(n_classes))
}
# Dropout Probability
keep_prob = tf.placeholder(tf.float32)
def conv2d(x, W, b, strides=1):
    # Conv2D wrapper, with bias and ReLU activation
    x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
    x = tf.nn.bias_add(x, b)
    return tf.nn.relu(x)

def maxpool2d(x, k=2):
    return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, k, k, 1], padding='SAME')

def dropout(x, keep_probability):
    return tf.nn.dropout(x, keep_probability)
# Create model
def conv_net(x, weights, biases):
    # Layer 1 - 32*32*3 to 16*16*64
    conv1a = conv2d(x, weights['layer_1a'], biases['layer_1a'])
    conv1b = conv2d(conv1a, weights['layer_1b'], biases['layer_1b'])
    conv1 = maxpool2d(conv1b)
    conv1 = dropout(conv1, keep_prob)
    # Layer 2 - 16*16*64 to 8*8*128
    conv2a = conv2d(conv1, weights['layer_2a'], biases['layer_2a'])
    conv2b = conv2d(conv2a, weights['layer_2b'], biases['layer_2b'])
    conv2 = maxpool2d(conv2b)
    conv2 = dropout(conv2, keep_prob)
    # Layer 3 - 8*8*128 to 4*4*256
    conv3a = conv2d(conv2, weights['layer_3a'], biases['layer_3a'])
    conv3b = conv2d(conv3a, weights['layer_3b'], biases['layer_3b'])
    conv3 = maxpool2d(conv3b)
    conv3 = dropout(conv3, keep_prob)
    # Fully connected layer - 4*4*256 to 512
    # Reshape conv3 output to fit fully connected layer input
    fc1 = tf.reshape(
        conv3,
        [-1, weights['fully_connected'].get_shape().as_list()[0]])
    fc1 = tf.add(
        tf.matmul(fc1, weights['fully_connected']),
        biases['fully_connected'])
    fc1 = tf.nn.tanh(fc1)
    fc1 = dropout(fc1, keep_prob)
    # Output layer - class prediction - 512 to 43
    out = tf.add(tf.matmul(fc1, weights['out']), biases['out'])
    return out
# Parameters
learning_rate = 0.0001
batch_size = 256
training_epochs = 100
# TF Graph input
x = tf.placeholder("float", [None, 32, 32, 3])
y = tf.placeholder("float", [None, n_classes])
# Output logits and predictions from net
logits = conv_net(x, weights, biases)
prediction = tf.nn.softmax(logits)
# Cross Entropy and Loss
# The small epsilon guards against log(0); tf.nn.softmax_cross_entropy_with_logits
# would be the numerically stable alternative.
cross_entropy = -tf.reduce_sum(y * tf.log(prediction + 1e-10), reduction_indices=1)
loss = tf.reduce_mean(cross_entropy)
# Accuracy
is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))
# Optimizer
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss)
# Initializing the variables
init = tf.global_variables_initializer()
def get_random_train_batch():
    # Sample 500 random examples from the full training set
    # (np.arange(500) would only ever sample the first 500 examples).
    train_perm = np.random.choice(n_train, 500, replace=False)
    X_train_acc = X_train[train_perm]
    y_train_acc = y_train[train_perm]
    return X_train_acc, y_train_acc
# Feed dicts for accuracy
valid_feed_dict = {x: X_validation, y: y_validation, keep_prob: 1.0}
test_feed_dict = {x: X_test, y: y_test, keep_prob: 1.0}
# The accuracy measured against the validation set
best_validation_accuracy = 0.0
# Number of iterations of no validation accuracy improvement before stopping early
early_stop_num = 1000
# Last iteration that the validation accuracy improved
last_improvement_iteration = 0
total_iterations = 0
# Measurements used for graphing loss and accuracy
log_batch_step = 50
batches = []
train_acc_batch = []
valid_acc_batch = []
# Create Saver
saver = tf.train.Saver()
save_path = "model/best_model"
# Launch the graph
with tf.Session() as sess:
    sess.run(init)
    batch_count = int(math.ceil(n_train / batch_size))
    # Training cycle
    for epoch in range(training_epochs):
        # Progress bar
        batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch+1, training_epochs), unit='batches')
        # Shuffle the training data each epoch
        perm = np.arange(n_train)
        np.random.shuffle(perm)
        X_train = X_train[perm]
        y_train = y_train[perm]
        # Loop over all batches
        for batch_i in batches_pbar:
            # Get a batch of training features and labels
            batch_start = batch_i * batch_size
            batch_X = X_train[batch_start : batch_start + batch_size]
            batch_y = y_train[batch_start : batch_start + batch_size]
            # Run optimization op (backprop)
            _ = sess.run(optimizer, feed_dict={x: batch_X, y: batch_y, keep_prob: 0.5})
            total_iterations += 1
            # Log every log_batch_step (50) batches
            if not batch_i % log_batch_step:
                # Calculate training and validation accuracy
                X_train_acc, y_train_acc = get_random_train_batch()
                training_accuracy = sess.run(accuracy, feed_dict={x: X_train_acc, y: y_train_acc, keep_prob: 1.0})
                validation_accuracy = sess.run(accuracy, feed_dict=valid_feed_dict)
                # If validation accuracy is an improvement over the best seen
                if validation_accuracy > best_validation_accuracy:
                    # Update best seen validation accuracy
                    best_validation_accuracy = validation_accuracy
                    # Update last iteration to show validation accuracy improvement
                    last_improvement_iteration = total_iterations
                    # Save all variables of the TensorFlow graph to file
                    saver.save(sess=sess, save_path=save_path)
                    print("Improved Validation Accuracy: ", validation_accuracy)
                # Log batches
                previous_batch = batches[-1] if batches else 0
                batches.append(log_batch_step + previous_batch)
                train_acc_batch.append(training_accuracy)
                valid_acc_batch.append(validation_accuracy)
        # Early stopping check after each epoch
        if total_iterations - last_improvement_iteration > early_stop_num:
            print("No improvement found in {} batches, stopping optimization.".format(early_stop_num))
            break
print("Optimization Finished!")
acc_plot = plt.subplot(212)
acc_plot.set_title('Accuracy')
acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')
acc_plot.plot(batches, valid_acc_batch, 'b', label='Validation Accuracy')
acc_plot.set_ylim([0, 1.0])
acc_plot.set_xlim([batches[0], batches[-1]])
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()
# Improved Validation Accuracy: 0.996684
# Class used to save and/or restore Tensor Variables
saver = tf.train.Saver()
save_path = "model/best_model"
with tf.Session() as sess:
    # Load the weights and biases
    saver.restore(sess, save_path)
    test_accuracy = sess.run(accuracy, feed_dict=test_feed_dict)
    print("Test Accuracy: ", test_accuracy)
Best Validation Accuracy: 0.996684 Test Accuracy: 0.958591
Describe the techniques used to preprocess the data.
Answer: I tested 3 methods for normalizing the images:
1. Compute mean/stddev per pixel and channel across the entire batch. Subtract mean, divide by stddev for each pixel/channel.
2. Compute mean/stddev per image. Subtract mean, divide by stddev for each pixel/channel.
3. Convert each image to grayscale, then perform min-max normalization so each pixel has intensity in [0.1, 0.9].
I was able to implement each of these methods efficiently and didn't see a large difference in performance on the small subset I tested, aside from a very small improvement with method 2.
I couldn't conclude that method 2 would actually perform better on the full training set, but given my limited computing power, I decided to move forward with method 2 for actual training.
Looking back, I would test adding CLAHE processing to normalize brightness.
Describe how you set up the training, validation and testing data for your model. If you generated additional data, why?
Answer:
The test set was already separate on load, so I left that alone. The training data was split 80/20 into a training and a validation set. Prior to data augmentation, I had an approximate 60/15/25 split between the training, validation, and test sets respectively. Note that this is prior to data augmentation, so the training share is higher after accounting for it.
For data augmentation, I only augmented the training data set. For obvious reasons I didn't want to touch the test set in any way. As for the validation set, I didn't want to perform data augmentation prior to the training-validation set split because that would have leaked validation data into the training set. I also didn't see a benefit of adding validation augmentation after the training-validation split, as I wanted the validation set to model the test set as closely as possible. For creating the augmented data, I used only random rotations of the image (between -45 and 45 degrees). I had also considered adding horizontal flips, but given that there were left and right turn signs, this seemed ill advised.
Looking back, I would likely add zooming (in and out) and skewing of the images for data augmentation. Another consideration would be to add different brightnesses of the images as augmented data.
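Brightness augmentation, as suggested, could be sketched as scaling each image's intensities by its own random factor (hypothetical helper):

```python
import numpy as np

def jitter_brightness(images, low=0.6, high=1.4, rng=None):
    """Scale each uint8 image by its own random brightness factor, clipping to [0, 255]."""
    rng = np.random.default_rng() if rng is None else rng
    factors = rng.uniform(low, high, size=(len(images), 1, 1, 1))
    return np.clip(images.astype(np.float32) * factors, 0, 255).astype(np.uint8)
```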
Data Breakdown:
Number of training examples = 156835 Number of validation examples = 7842 Number of testing examples = 12630
What does your final architecture look like? (Type of model, layers, sizes, connectivity, etc.) For reference on how to build a deep neural network using TensorFlow, see Deep Neural Network in TensorFlow from the classroom.
Answer:
Total Weights = 3,263,680
How did you train your model? (Type of optimizer, batch size, epochs, hyperparameters, etc.)
Answer:
What approach did you take in coming up with a solution to this problem?
Answer: I broke the project down into phases:
I used VGG Net as inspiration for my model, using multiple conv layers prior to pooling, in a pyramid shape. By the time it reached its current form, it took ~14 hours to train on my laptop, and I'd reached over 99.5% validation accuracy. Adding any more complexity would have been computationally prohibitive, so I moved forward with final testing, which reached over 95% accuracy.
Take several pictures of traffic signs that you find on the web or around you (at least five), and run them through your classifier on your computer to produce example results. The classifier might not recognize some local signs but it could prove interesting nonetheless.
You may find signnames.csv useful as it contains mappings from the class id (integer) to the actual sign name.
import csv
def get_sign_names():
    sign_names = dict()
    with open('signnames.csv', 'r') as csvfile:
        reader = csv.reader(csvfile)
        next(reader)  # skip the header line
        for row in reader:
            sign_names[int(row[0])] = row[1]
    return sign_names
Choose five candidate images of traffic signs and provide them in the report. Are there any particular qualities of the image(s) that might make classification difficult? It would be helpful to plot the images in the notebook.
Answer:
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
import cv2
import glob
def load_images_and_labels(folder, plot=False):
    new_image_path = folder + "/*.jpg"
    new_images = []
    # Sort so the image order matches the alphabetical label order in labels.csv
    # (glob makes no ordering guarantee).
    for img in sorted(glob.glob(new_image_path)):
        new_images.append(cv2.imread(img))
    # Convert BGR to RGB, resize each image to 32x32x3, and optionally plot it
    resized_images = []
    for img in new_images:
        b, g, r = cv2.split(img)        # get b, g, r
        rgb_img = cv2.merge([r, g, b])  # switch it to RGB
        resized_img = cv2.resize(rgb_img, (32, 32), interpolation=cv2.INTER_AREA)
        if plot:
            plt.imshow(resized_img)
            plt.show()
        resized_images.append(resized_img)
    # Load labels
    with open(folder + '/labels.csv') as f:
        label_reader = csv.reader(f)
        row = next(label_reader)
        labels = [int(label) for label in row]
    return resized_images, labels
from sklearn.preprocessing import LabelBinarizer
def one_hot_encode(labels):
    encoder = LabelBinarizer()
    encoder.fit([i for i in range(43)])
    return encoder.transform(labels)
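For 43 integer classes, the LabelBinarizer call above is equivalent to a simple one-hot lookup; a numpy sketch (helper name hypothetical):

```python
import numpy as np

def one_hot(labels, n_classes=43):
    # Row i of the result is the one-hot vector for labels[i].
    return np.eye(n_classes, dtype=np.float32)[np.asarray(labels)]
```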
sign_names = get_sign_names()
X_signs_unprocessed, labels = load_images_and_labels('new_signs', plot=True)
X_signs = normalize_dataset_alt(X_signs_unprocessed)
y_signs = one_hot_encode(labels)
I tried to choose images that were photos of signs instead of just being signs on a white background. This should show how well the model predicts signs it would expect to see in the real world. Some other differences that may cause issues are:
1. The Speed Limit sign is from North America, so it's square instead of round.
2. The No Passing sign is slightly obscured by vegetation.
3. The deer on the Wild Animal Crossing sign is flipped horizontally from what is seen in the training set.
Is your model able to perform equally well on captured pictures when compared to testing on the dataset?
Answer:
On the 5 images I pulled from online, I achieved an accuracy rate of 80%. Given the small sample size, and the fact that the second-highest-probability classification was correct on the missed sign, it's hard to say whether the model performed equally well. However, given some of the differences between the new signs and the existing sign data (e.g. 30 mph vs 30 km/h, the deer was flipped on the wild animal sign), the model performed quite well.
### Visualize the softmax probabilities here.
### Feel free to use as many code cells as needed.
save_path = "model/best_model"
saver = tf.train.Saver()
top_5_predictions = tf.nn.top_k(prediction, k=5)
def get_accuracy_on_new_images(X_signs, y_signs):
    with tf.Session() as sess:
        sess.run(init)
        # Load the saved weights and biases
        saver.restore(sess, save_path)
        return sess.run(accuracy, feed_dict={x: X_signs, y: y_signs, keep_prob: 1.0})

print(get_accuracy_on_new_images(X_signs, y_signs))
Use the model's softmax probabilities to visualize the certainty of its predictions, tf.nn.top_k could prove helpful here. Which predictions is the model certain of? Uncertain? If the model was incorrect in its initial prediction, does the correct prediction appear in the top k? (k should be 5 at most)
Answer:
The model is able to confidently predict the stop sign, the 30 mph sign (even though it was trained on 30 km/h), the left turn ahead sign, and the no passing sign. The one sign it failed to correctly predict was the Wild Animal Crossing sign, for which the correct class was the 2nd place prediction. The wild animal crossing sign I downloaded was flipped horizontally compared to the training signs.
One thing I noticed is that scaling down the signs to 32x32 significantly blurred the defining features of the sign. It seems that if we had trained on higher resolution images and test images, we likely would have had significantly better results, though it also would have required a more complex and more difficult to train model.
def plot_results(sign_num, sign_indices, sign_values, sign_names, X_signs_unprocessed):
    # Sign name
    print("Sign: ", sign_names[labels[sign_num]])
    # Plot graph of top-5 confidences
    plt.subplot(1, 2, 1)
    top_sign_names = [sign_names[ind] for ind in sign_indices]
    y_pos = np.arange(len(top_sign_names))
    plt.barh(y_pos, sign_values, align='center', alpha=0.4)
    plt.gca().invert_yaxis()
    plt.yticks(y_pos, top_sign_names)
    plt.xlabel('Confidence')
    # Plot image
    plt.subplot(1, 2, 2)
    plt.imshow(X_signs_unprocessed[sign_num])
    plt.show()
    print("\n")
def show_top_predictions(X_signs, y_signs, X_signs_unprocessed, sign_names):
    saver = tf.train.Saver()
    with tf.Session() as sess:
        # Load the weights and biases
        saver.restore(sess, save_path)
        values, indices = sess.run(top_5_predictions, feed_dict={x: X_signs, y: y_signs, keep_prob: 1.0})
    for sign_num in range(len(values)):
        plot_results(sign_num, indices[sign_num], values[sign_num], sign_names, X_signs_unprocessed)

show_top_predictions(X_signs, y_signs, X_signs_unprocessed, sign_names)
If necessary, provide documentation for how an interface was built for your model to load and classify newly-acquired images.
Answer: Run the code in the "Model" section to define the model as well as the "Data Processing" section to define the normalization methods used.
Put jpg images in a folder of your choice, along with a file called labels.csv. labels.csv should contain the classId labels on a single line, separated by commas, in alphabetical order of the image filenames.
Pass the path to this folder to the method defined below.
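For example, a hypothetical labels.csv for five images (class ids chosen arbitrarily) and the parsing it implies:

```python
import csv
import io

# A labels.csv body: class ids on one line, comma-separated,
# ordered alphabetically by image filename.
labels_csv = "14,1,34,9,31"

row = next(csv.reader(io.StringIO(labels_csv)))
parsed_labels = [int(label) for label in row]  # -> [14, 1, 34, 9, 31]
```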
def classify_new_images(path):
    sign_names = get_sign_names()
    X_signs_unprocessed, labels = load_images_and_labels(path)
    X_signs = normalize_dataset_alt(X_signs_unprocessed)
    y_signs = one_hot_encode(labels)
    print("Accuracy: ", get_accuracy_on_new_images(X_signs, y_signs))
    show_top_predictions(X_signs, y_signs, X_signs_unprocessed, sign_names)

classify_new_images('new_signs')
Note: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.